149 research outputs found

    Three-Dimensional Medical Image Fusion with Deformable Cross-Attention

    Multimodal medical image fusion plays an instrumental role in several areas of medical image processing, particularly in disease recognition and tumor detection. Traditional fusion methods tend to process each modality independently before combining the features and reconstructing the fused image. However, this approach often neglects the fundamental commonalities and disparities between multimodal information. Furthermore, the prevailing methodologies are largely confined to fusing two-dimensional (2D) medical image slices, leading to a lack of contextual supervision in the fused images and, subsequently, a decreased information yield for physicians relative to three-dimensional (3D) images. In this study, we introduce an unsupervised feature mutual learning fusion network designed to rectify these limitations. Our approach incorporates a Deformable Cross Feature Blend (DCFB) module that helps the two modalities discern their respective similarities and differences. We have applied our model to the fusion of 3D MRI and PET images obtained from 660 patients in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Through the application of the DCFB module, our network generates high-quality MRI-PET fusion images. Experimental results demonstrate that our method surpasses traditional 2D image fusion methods in performance metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). Importantly, the capacity of our method to fuse 3D images enhances the information available to physicians and researchers, marking a significant step forward in the field. The code will soon be available online.
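The two fidelity metrics named above have standard closed forms. A minimal NumPy sketch follows, computing PSNR and a single-window "global" SSIM (the full SSIM statistic averages the same quantity over local windows; the function names and the 8-bit data range are illustrative assumptions, not from the paper):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak Signal-to-Noise Ratio between a reference and a fused image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=255.0):
    """Single-window SSIM over the whole image; full SSIM averages
    this statistic over local (e.g. 7x7 or Gaussian) windows."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical inputs give infinite PSNR and an SSIM of 1, which is a quick sanity check for any fusion-metric implementation.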

    Overexpression of Optic Atrophy Type 1 Protects Retinal Ganglion Cells and Upregulates Parkin Expression in Experimental Glaucoma

    Glaucoma is a neurodegenerative disease that features progressive loss of retinal ganglion cells (RGCs). Increasing evidence has revealed that impaired mitochondrial dynamics occur early in neurodegenerative diseases. Optic Atrophy Type 1 (OPA1), a mitochondrial fusion protein, has recently been suggested to be a mitophagic factor. Our previous studies found that glaucomatous retinal damage may be ameliorated by an increase in mitochondrial OPA1. In this study, we explored the mechanism involved in OPA1-mediated neuroprotection and its relationship with parkin-dependent mitophagy in experimental glaucoma models. Our data showed that overexpression of OPA1 by viral vectors protected against RGC loss, attenuated Bax expression, and improved mitochondrial health and mitochondrial surface area. Parkin expression and the number of mitophagosomes were upregulated in OPA1-overexpressing RGCs under glutamate excitotoxicity, whereas knockdown of OPA1 by siRNA decreased parkin protein expression in RGCs under glutamate excitotoxicity. Two weeks after intraocular pressure (IOP) elevation, the LC3-II/I ratio and LAMP1 expression were increased in OPA1-overexpressing optic nerves. These findings suggest that OPA1 overexpression may protect RGCs by enhancing mitochondrial fusion and parkin-mediated mitophagy. Interventions to promote mitochondrial fusion and mitophagy may provide a useful strategy against glaucomatous RGC loss.

    Predictive value of HFA-PEFF score in patients with heart failure with preserved ejection fraction

    The HFA-PEFF score has been proposed for diagnosing heart failure with preserved ejection fraction (HFpEF). Currently, there are only a limited number of tools for predicting its prognosis. In this study, we evaluated whether the HFA-PEFF score can predict mortality in patients with HFpEF. This single-center, retrospective observational study enrolled patients diagnosed with HFpEF at the First Affiliated Hospital of Dalian Medical University between January 1, 2015, and April 30, 2018. The subjects were divided according to their HFA-PEFF score into low (0-2 points), intermediate (3-4 points), and high (5-6 points) score groups. The primary outcome was all-cause mortality. A total of 358 patients (mean age: 70.21 ± 8.64 years, 58.1% female) were included. Of these, 63 (17.6%), 156 (43.6%), and 139 (38.8%) were classified into the low, intermediate, and high score groups, respectively. Over a mean follow-up of 26.9 months, 46 patients (12.8%) died. The numbers of patients who died in the low, intermediate, and high score groups were 1 (1.6%), 18 (11.5%), and 27 (19.4%), respectively. A multivariate Cox regression identified the HFA-PEFF score as an independent predictor of all-cause mortality [hazard ratio (HR): 1.314, 95% CI: 1.013-1.705, P = 0.039]. A Cox analysis demonstrated a significantly higher rate of mortality in the intermediate (HR: 4.912, 95% CI: 1.154-20.907, P = 0.031) and high score groups (HR: 5.291, 95% CI: 1.239-22.593, P = 0.024) than in the low score group. A receiver operating characteristic (ROC) analysis indicated that the HFA-PEFF score can effectively predict all-cause mortality after adjusting for age and New York Heart Association (NYHA) class [area under the curve (AUC): 0.726, 95% CI: 0.651-0.800, P < 0.001]. With an HFA-PEFF score cut-off value of 3.5, the sensitivity and specificity were 78.3% and 54.8%, respectively. The AUC on ROC analysis for the biomarker component of the score was similar to that of the total score.
The HFA-PEFF score can be used both to diagnose HFpEF and to predict prognosis; higher scores are associated with higher all-cause mortality. [Abstract copyright: Copyright © 2021 Sun, Si, Li, Dai, King, Zhang, Zhang, Xia, Tse and Liu.]
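The risk stratification described above is a direct mapping from score to group, plus a single ROC-derived cut-off. A minimal sketch (function names are hypothetical, not from the study):

```python
def hfa_peff_group(score: int) -> str:
    """Map an HFA-PEFF score (0-6 points) to the study's risk strata."""
    if not 0 <= score <= 6:
        raise ValueError("HFA-PEFF score must be between 0 and 6")
    if score <= 2:
        return "low"           # 0-2 points
    if score <= 4:
        return "intermediate"  # 3-4 points
    return "high"              # 5-6 points

def above_mortality_cutoff(score: int) -> bool:
    """Apply the reported ROC cut-off of 3.5 for all-cause mortality
    (sensitivity 78.3%, specificity 54.8% in the study cohort)."""
    return score > 3.5
```

Because the score is integer-valued, the 3.5 cut-off simply separates scores of 4 and above from scores of 3 and below.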

    Investigating mechanism of inclined CPT in granular ground using DEM

    This paper presents an investigation of the mechanism of the inclined cone penetration test (CPT) using the numerical discrete element method (DEM). A series of penetration tests with the penetrometer inclined at different angles (i.e., 0°, 15°, 30°, 45° and 60°) were numerically performed under μ = 0.0 and μ = 0.5, where μ is the frictional coefficient between the penetrometer and the soil. The deformation patterns, displacements of soil particles adjacent to the cone tip, velocity fields, rotations of the principal stresses and the averaged pure rotation rate (APR) were analyzed. Special focus was placed on the effect of friction. The DEM results showed that soils around the cone tip experienced complex displacement paths at different positions as the inclined penetration proceeded, and that friction had significant effects only on the soils adjacent to the penetrometer side and tip. Soils exhibited characteristic velocity fields corresponding to three different failure mechanisms, and the right side was more easily disturbed by friction. Friction started to play its role when the tip approached the observation points, while it had little influence on the rotation rate. The normalized tip resistance (q_c = f/σ_v0) increased with friction as well as with the inclination angle. The relationship between q_c and relative depth (y/R) can be described as q_c = a×(y/R)^(−b), with parameters a and b dependent on the penetration direction. The normalized resistance perpendicular to the penetrometer axis, q_p, increases with the inclination angle; thus, the inclination angle should be carefully selected to ensure that the penetrometer does not deviate from its original direction, or even break, in real tests.
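The reported depth-resistance relationship is a power law, so the fitting parameters a and b can be recovered by ordinary least squares in log space. A minimal sketch on synthetic data (the values of a and b here are illustrative, not taken from the paper):

```python
import numpy as np

# Synthetic (y/R, q_c) pairs generated from q_c = a * (y/R)**(-b);
# a negative b makes q_c increase with relative depth.
a_true, b_true = 12.0, -0.4
depth_ratio = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # y/R
qc = a_true * depth_ratio ** (-b_true)

# Taking logs linearizes the model: log q_c = log a - b * log(y/R),
# a straight line with slope -b and intercept log a.
slope, intercept = np.polyfit(np.log(depth_ratio), np.log(qc), 1)
b_fit = -slope
a_fit = float(np.exp(intercept))
```

With noise-free data the log-linear fit recovers the generating parameters exactly; with DEM measurements the residuals would quantify how well the power-law form holds along a given penetration direction.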

    Point2PartVolume: Human body volume estimation from a single depth image

    Human body volume is a useful biometric feature for human identification and an important medical indicator for monitoring body health. Traditional body volume estimation techniques such as underwater weighing and air displacement require substantial equipment and are difficult to perform in some circumstances, e.g. in clinical environments when dealing with bedridden patients. In this contribution, a novel vision-based deep learning method dubbed Point2PartVolume is proposed to rapidly and accurately predict part-aware body volumes from a single depth image of the dressed body. First, a novel multi-task neural network is proposed for jointly completing the partial body point clouds, predicting the body shape under clothing, and semantically segmenting the reconstructed body into parts. Next, the estimated body segments are fed into the proposed volume regression network to estimate the partial volumes. A simple yet efficient two-step training strategy is proposed for improving the accuracy of volume prediction regressed from point clouds. Compared to existing methods, the proposed method addresses several major challenges in vision-based human body volume estimation, including shape completion, pose estimation, body shape estimation under clothing, body segmentation, and volume regression from point clouds. Experimental results on both synthetic data and public real-world data show that our method achieved an average volume prediction accuracy of 90% and outperformed the relevant state of the art.
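The paper regresses volumes with a neural network; as a classical point of comparison, once a closed surface mesh of a body segment is available, its enclosed volume follows directly from the divergence theorem. A minimal sketch of that baseline (not the paper's method):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh,
    computed as the sum of signed tetrahedra spanned with the origin."""
    v = np.asarray(vertices, dtype=np.float64)
    total = 0.0
    for i, j, k in faces:
        # Six times the signed volume of tetrahedron (origin, v_i, v_j, v_k).
        total += np.dot(v[i], np.cross(v[j], v[k]))
    return abs(total) / 6.0
```

For a unit tetrahedron with vertices at the origin and the three unit axis points, this returns 1/6, the expected volume. The catch, which motivates the learning-based approach, is that a single depth image yields only a partial, clothed point cloud, so a watertight body mesh must first be reconstructed.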